Weakly-Supervised Semantic Segmentation Using Motion Cues

Authors

  • Pavel Tokmakov
  • Karteek Alahari
  • Cordelia Schmid
Abstract

Fully convolutional neural networks (FCNNs) trained on a large number of images with strong pixel-level annotations have become the new state of the art for the semantic segmentation task. While there have been recent attempts to learn FCNNs from image-level weak annotations, they need additional constraints, such as the size of an object, to obtain reasonable performance. To address this issue, we present motion-CNN (M-CNN), a novel FCNN framework which incorporates motion cues and is learned from video-level weak annotations. Our learning scheme to train the network uses motion segments as soft constraints, thereby handling noisy motion information. When trained on weakly-annotated videos, our method outperforms the state-of-the-art approach [28] on the PASCAL VOC 2012 image segmentation benchmark. We also demonstrate that the performance of M-CNN learned with 150 weak video annotations is on par with state-of-the-art weakly-supervised methods trained with thousands of images. Finally, M-CNN substantially outperforms recent approaches in a related task of video co-localization on the YouTube-Objects dataset. This is an extended version of our ECCV paper [39].
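As a rough illustration of using motion segments as soft rather than hard supervision, the following PyTorch sketch down-weights a noisy motion mask when training an FCNN from a video-level class label. The function name, the weighting scheme, and the background-channel convention are assumptions for illustration, not the formulation from the paper.

```python
# Hypothetical sketch, NOT the authors' exact loss: treat a noisy motion
# segmentation mask as a soft constraint when only a video-level label exists.
import torch
import torch.nn.functional as F

def motion_soft_constraint_loss(logits, motion_mask, video_label, alpha=0.5):
    """logits: (B, C, H, W) FCNN outputs; motion_mask: (B, H, W) float in
    {0, 1}, 1 where independent motion was detected; video_label: (B,) int64
    class index from the weak video annotation; alpha: trust in the cue."""
    log_probs = F.log_softmax(logits, dim=1)  # per-pixel class log-probabilities
    idx = video_label.view(-1, 1, 1, 1).expand(-1, 1, *motion_mask.shape[1:])
    fg = log_probs.gather(1, idx).squeeze(1)  # log p(annotated class) per pixel
    bg = log_probs[:, 0]                      # channel 0 assumed background
    # Encourage the annotated class inside motion segments and background
    # outside them, scaled by alpha so noisy masks never act as hard labels.
    return -(alpha * (motion_mask * fg + (1.0 - motion_mask) * bg)).mean()
```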


Similar Resources

Superpixel clustering with deep features for unsupervised road segmentation

Vision-based autonomous driving requires classifying each pixel as road or not road, which can be addressed with semantic segmentation. Semantic segmentation works well with a fully supervised model, but in practice creating the required pixel-wise annotations is very expensive. Although weakly supervised segmentation addresses this issue, most methods are not de...
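A generic sketch of this recipe, under assumed details (SLIC superpixels, k-means, and a deep feature map already upsampled to image resolution; the paper's actual pipeline may differ):

```python
# Assumed pipeline sketch: average a deep feature map over SLIC superpixels,
# then cluster the per-superpixel descriptors with k-means.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def cluster_superpixels(image, feature_map, n_segments=200, n_clusters=2):
    """image: (H, W, 3) float in [0, 1]; feature_map: (H, W, D) deep features
    upsampled to image resolution (feature extractor not shown here)."""
    segments = slic(image, n_segments=n_segments, compactness=10.0)
    ids = np.unique(segments)
    # One descriptor per superpixel: the mean deep feature inside it.
    descriptors = np.stack([feature_map[segments == i].mean(axis=0) for i in ids])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(descriptors)
    # Paint cluster ids back onto the pixel grid (e.g., road vs. not-road).
    out = np.zeros(segments.shape, dtype=np.int64)
    for i, lab in zip(ids, labels):
        out[segments == i] = lab
    return out
```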


Discovering Class-Specific Pixels for Weakly-Supervised Semantic Segmentation

We propose an approach to discover class-specific pixels for the weakly-supervised semantic segmentation task. We show that properly combining saliency and attention maps allows us to obtain reliable cues capable of significantly boosting performance. First, we propose a simple yet powerful hierarchical approach to discover the class-agnostic salient regions, obtained using a salient object ...
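One plausible way to combine the two cues is to intersect a class-agnostic saliency map with per-class attention maps to harvest seed pixels; the thresholds and the combination rule below are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: a pixel becomes a seed for class c if it is both salient
# and strongly attended by class c; non-salient pixels seed the background.
import numpy as np

def class_specific_seeds(saliency, attention, image_labels,
                         sal_thresh=0.5, att_thresh=0.3):
    """saliency: (H, W) in [0, 1]; attention: (C, H, W) per-class maps in
    [0, 1]; image_labels: iterable of class indices present in the image."""
    H, W = saliency.shape
    seeds = np.full((H, W), -1, dtype=np.int64)  # -1 = ignore (no seed)
    salient = saliency > sal_thresh
    seeds[~salient] = 0                          # confident background seeds
    for c in image_labels:
        seeds[salient & (attention[c] > att_thresh)] = c
    return seeds
```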


Weakly Supervised Semantic Segmentation Using Superpixel Pooling Network

We propose a weakly supervised semantic segmentation algorithm based on deep neural networks, which relies on image-level class labels only. The proposed algorithm alternates between generating segmentation annotations and learning a semantic segmentation network using the generated annotations. A key determinant of success in this framework is the capability to construct reliable initial annota...
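The pooling operation itself can be sketched as averaging CNN features over precomputed superpixel ids. This is a minimal illustration of superpixel average pooling, not the paper's exact network.

```python
# Minimal superpixel average-pooling sketch (assumed interface):
# one pooled descriptor per superpixel from a dense feature map.
import torch

def superpixel_avg_pool(features, segments, n_superpixels):
    """features: (C, H, W) float; segments: (H, W) int64 superpixel ids in
    [0, n_superpixels); returns (n_superpixels, C) pooled descriptors."""
    C = features.shape[0]
    flat_feat = features.reshape(C, -1).t()      # (H*W, C)
    flat_seg = segments.reshape(-1)              # (H*W,)
    # Sum features per superpixel, then divide by the pixel count.
    sums = torch.zeros(n_superpixels, C).index_add_(0, flat_seg, flat_feat)
    counts = torch.zeros(n_superpixels).index_add_(
        0, flat_seg, torch.ones_like(flat_seg, dtype=torch.float))
    return sums / counts.clamp(min=1).unsqueeze(1)
```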


Sparse Reconstruction for Weakly Supervised Semantic Segmentation

We propose a novel approach to semantic segmentation using weakly supervised labels. In traditional fully supervised methods, superpixel labels are available for training; however, it is not easy to obtain enough labeled superpixels to learn a satisfactory model for semantic segmentation. By contrast, only image-level labels are necessary in weakly supervised methods, which makes them more practi...
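A common instantiation of sparse-reconstruction labeling, sketched under assumptions (per-class dictionaries learned elsewhere, OMP as the sparse solver): assign each superpixel to the class whose dictionary reconstructs its descriptor with the smallest error.

```python
# Illustrative sketch, not necessarily the paper's method: label each
# superpixel by its minimal sparse-reconstruction error across classes.
import numpy as np
from sklearn.decomposition import SparseCoder

def label_by_reconstruction(features, class_dicts, n_nonzero=5):
    """features: (N, D) superpixel descriptors; class_dicts: list of
    (K_c, D) dictionaries, one per class, atoms assumed l2-normalized."""
    errors = []
    for D_c in class_dicts:
        coder = SparseCoder(dictionary=D_c, transform_algorithm='omp',
                            transform_n_nonzero_coefs=n_nonzero)
        codes = coder.transform(features)        # sparse codes per descriptor
        recon = codes @ D_c                      # reconstructed descriptors
        errors.append(((features - recon) ** 2).sum(axis=1))
    return np.argmin(np.stack(errors), axis=0)   # best class per superpixel
```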


Amortized Inference and Learning in Latent Conditional Random Fields for Weakly-Supervised Semantic Image Segmentation

Conditional random fields (CRFs) are commonly employed as a post-processing tool for image segmentation tasks. The unary potentials of the CRF are often learned independently by a classifier, thereby decoupling CRF inference from classifier training. Such a scheme works effectively when pixel-level labelling is available for all the images. However, in the absence of pixel-level label...
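For context, a typical decoupled pipeline refines classifier softmax outputs with a dense CRF. The sketch below uses the pydensecrf library with illustrative kernel parameters, as one common way to implement this post-processing step, not necessarily the paper's setup.

```python
# Common dense-CRF post-processing sketch; kernel parameters are illustrative.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, n_iters=5):
    """image: (H, W, 3) uint8; probs: (C, H, W) classifier softmax outputs."""
    C, H, W = probs.shape
    d = dcrf.DenseCRF2D(W, H, C)
    d.setUnaryEnergy(unary_from_softmax(probs))   # unaries = -log(probs)
    d.addPairwiseGaussian(sxy=3, compat=3)        # location-only smoothness
    d.addPairwiseBilateral(sxy=80, srgb=13,       # appearance-sensitive kernel
                           rgbim=np.ascontiguousarray(image), compat=10)
    Q = d.inference(n_iters)                      # mean-field inference
    return np.argmax(np.array(Q).reshape(C, H, W), axis=0)
```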



Journal:

Volume   Issue

Pages  -

Publication date: 2016